
    Analysis of New L5 Algorithm Embedded with Modified AES Algorithm in Address Allocation Schemes

    Networking is ubiquitous, and in this fast-moving world routing plays a major role for a myriad of purposes across the internet. Address allocation is a vital component of routing, and it is important to route packets securely to the intended destination. However, current routing processes are not as efficient as they should be: they are tedious and time consuming, and vulnerabilities found in the AES algorithm have been exploited by hackers and sniffers. To overcome these shortcomings, two new mathematically validated approaches, a new L5 routing algorithm and a modified AES algorithm, are proposed in this article. The proposed schemes limit the possibilities of hacking, and their mathematical analysis shows reduced time and space complexities. The newly proposed routing algorithm achieves fault tolerance and secure transmission of data, and provides congestion control.
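
    The abstract does not describe how the AES algorithm is modified. For context only, the sketch below shows baseline AES-GCM encryption of a packet payload with the widely used Python `cryptography` package; the proposed scheme presumably alters this standard cipher, and every identifier here is illustrative rather than taken from the paper.

```python
# Baseline AES-GCM encryption of a routed packet's payload (illustrative only).
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

header = b"src=10.0.0.2;dst=10.0.0.7"       # authenticated but not encrypted
payload = b"example packet payload"

ciphertext = aesgcm.encrypt(nonce, payload, header)
assert aesgcm.decrypt(nonce, ciphertext, header) == payload
```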

    DPD-InfoGAN: Differentially Private Distributed InfoGAN

    Generative Adversarial Networks (GANs) are deep learning architectures capable of generating synthetic datasets. Despite producing high-quality synthetic images, the default GAN has no control over the kinds of images it generates. The Information Maximizing GAN (InfoGAN) is a variant of the default GAN that introduces feature-control variables that are automatically learned by the framework, hence providing greater control over the different kinds of images produced. Due to the high model complexity of InfoGAN, the generative distribution tends to be concentrated around the training data points. This is a critical problem as the models may inadvertently expose the sensitive and private information present in the dataset. To address this problem, we propose a differentially private version of InfoGAN (DP-InfoGAN). We also extend our framework to a distributed setting (DPD-InfoGAN) to allow clients to learn different attributes present in other clients' datasets in a privacy-preserving manner. In our experiments, we show that both DP-InfoGAN and DPD-InfoGAN can synthesize high-quality images with flexible control over image attributes while preserving privacy.
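
    The abstract does not specify how differential privacy is injected into InfoGAN training. A common building block for DP deep generative models, and a plausible one here, is DP-SGD-style gradient sanitization: clip each per-example gradient and add calibrated Gaussian noise before the optimizer step. The sketch below is a minimal NumPy version under that assumption; the function name and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def dp_sanitize(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each per-example gradient to `clip_norm`, average, and add Gaussian
    noise scaled to the clipping bound (the DP-SGD recipe). `per_example_grads`
    is an (n_examples, n_params) array of flattened gradients."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    mean_grad = clipped.mean(axis=0)
    noise_std = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, noise_std, size=mean_grad.shape)
```

    The privacy budget spent per step would then be tracked with a standard privacy accountant; those details are omitted here.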

    Gradient Masked Averaging for Federated Learning

    Federated learning (FL) is an emerging paradigm that permits a large number of clients with heterogeneous data to coordinate learning of a unified global model without the need to share data amongst each other. A major challenge in federated learning is the heterogeneity of data across clients, which can degrade the performance of standard FL algorithms. Standard FL algorithms involve averaging of model parameters or gradient updates to approximate the global model at the server. However, we argue that in heterogeneous settings, averaging can result in information loss and lead to poor generalization due to the bias induced by dominant client gradients. We hypothesize that to generalize better across non-i.i.d. datasets, the algorithms should focus on learning the invariant mechanism that is constant across clients while ignoring spurious mechanisms that differ across clients. Inspired by recent works in out-of-distribution generalization, we propose a gradient masked averaging approach for FL as an alternative to the standard averaging of client updates. This aggregation technique for client updates can be adapted as a drop-in replacement in most existing federated algorithms. We perform extensive experiments on multiple FL algorithms with in-distribution, real-world, feature-skewed out-of-distribution, and quantity-imbalanced datasets and show that it provides consistent improvements, particularly in the case of heterogeneous clients.
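
    The abstract describes masking the averaged update so that parameters on which clients broadly agree dominate the aggregate. A minimal sketch of one such sign-agreement mask is shown below; the hard threshold and the exact agreement score are assumptions, since the paper's mask may be soft or parameterized differently.

```python
import numpy as np

def gradient_masked_average(client_updates, agreement_threshold=0.5):
    """Average client updates, zeroing out parameters whose update sign is not
    shared by a sufficient fraction of clients (sign-agreement masking sketch)."""
    updates = np.stack(client_updates)                   # (n_clients, n_params)
    agreement = np.abs(np.sign(updates).mean(axis=0))    # 1.0 = all clients agree
    mask = (agreement >= agreement_threshold).astype(updates.dtype)
    return mask * updates.mean(axis=0)
```

    At the server, this masked aggregate would simply replace the plain mean of client updates before the global model is updated, which is what makes it a drop-in replacement.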

    A Practical Approach to Federated Learning

    Machine learning models benefit from large and diverse training datasets. However, it is difficult for an individual organization to collect sufficiently diverse data. Additionally, the sensitivity of the data and government regulations such as GDPR, HIPAA, and CCPA restrict how organizations can share data with other entities. This forces organizations with sensitive datasets to develop models that are only locally optimal. Federated learning (FL) facilitates robust machine learning by enabling the development of global models without sharing sensitive data. However, there are two broad challenges associated with deploying FL systems: privacy challenges and training/performance-related challenges. Privacy challenges pertain to attacks that reveal sensitive information about local client data. Training/performance-related challenges include high communication costs, data heterogeneity across clients, and a lack of personalization techniques. All of these concerns have to be addressed to make FL practical, scalable, and useful. In this thesis, I discuss techniques I have designed for addressing these challenges and describe two systems I have developed to mitigate them: PrivacyFL, a privacy-preserving simulator for FL, and DynamoFL, an easy-to-use production-level system for FL. (Ph.D. thesis)

    Improving the adaptability of differential privacy

    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 55-56). Differential privacy is a mathematical technique that provides strong theoretical privacy guarantees by ensuring statistical indistinguishability of individuals in a dataset. It has become the de facto framework for privacy-preserving analysis of statistical datasets and has garnered significant attention from researchers and privacy experts due to its strong guarantees. However, the lack of flexibility caused by the dearth of configurable parameters in existing mechanisms, the accuracy loss caused by the added noise, and the difficulty of choosing a suitable value of the privacy parameter ε have prevented its widespread adoption in industry. In this thesis, I address these issues. In differential privacy, the standard approach is to add Laplacian noise to the output of queries. I propose new probability distributions and noise-adding mechanisms that preserve ε-differential privacy and (ε, δ)-differential privacy. The distributions can be viewed as an asymmetric Laplacian distribution and a generalized truncated Laplacian distribution. I show that the proposed mechanisms add optimal noise in a global context, conditional upon technical lemmas, and that they offer greater adaptability than the Laplacian mechanism because there is more than one parameter to adjust. I then demonstrate that the generalized truncated Laplacian mechanism outperforms the optimal Gaussian mechanism. The presented mechanisms are highly useful because they enable data controllers to fine-tune the perturbation needed to protect privacy according to use-case-specific distortion requirements. The second issue addressed in this thesis is identifying an optimal value of ε and specifying bounds on it. ε quantifies the privacy risk posed by revealing statistics calculated on private and sensitive data. Although it has an intuitive theoretical interpretation, choosing an appropriate value is non-trivial. I present a systematic and methodical way to calculate ε once the necessary constraints are given. To derive context-specific optimal values and an upper bound on ε, I use the confidence probability approach, Chebyshev's inequality, and McDiarmid's inequality. By Vaikkunth Mugunthan.
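
    As context for the proposed asymmetric and generalized truncated Laplacian mechanisms, the sketch below shows the baseline Laplacian mechanism the thesis starts from: adding noise of scale Δf/ε to a query answer to obtain ε-differential privacy. The counting-query example and parameter values are illustrative only.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Standard epsilon-DP Laplace mechanism: add Laplace(0, sensitivity/epsilon)
    noise to the exact query answer."""
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query (sensitivity 1) released with epsilon = 0.5.
noisy_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)
```

    The mechanisms proposed in the thesis generalize this single-parameter noise distribution with additional tunable parameters (asymmetry, truncation bounds), which is where the claimed adaptability comes from.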

    Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy

    Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model while keeping client training data private, even from an untrusted server. Prior works do not provide efficient solutions that protect against collusion attacks in which parties collaborate to expose an honest client's model parameters. We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the "Sybil" attack in which a server preferentially selects compromised devices or simulates fake devices. We leverage the novel privacy mechanism to construct a secure federated learning protocol and prove the security of that protocol. We conclude with an empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two datasets within a realistic simulation of 5,000 distributed network clients.
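
    The abstract does not spell out the oblivious distributed differential privacy mechanism. The sketch below illustrates only the distributed-noise idea common in such protocols: each client adds a fraction of the Gaussian noise so that the server-side sum carries the full noise required by the DP guarantee. The secure aggregation or masking that keeps individual contributions hidden from the server (and that provides the collusion resistance) is omitted, and all names are illustrative.

```python
import numpy as np

def client_contribution(update, total_noise_std, n_clients, rng):
    """Each client adds only its share of the Gaussian noise; the independent
    shares sum to noise with the full standard deviation `total_noise_std`."""
    share_std = total_noise_std / np.sqrt(n_clients)
    return update + rng.normal(0.0, share_std, size=update.shape)

rng = np.random.default_rng(0)
n_clients, dim = 5, 3
updates = [rng.normal(size=dim) for _ in range(n_clients)]
noisy_sum = sum(client_contribution(u, total_noise_std=1.0, n_clients=n_clients, rng=rng)
                for u in updates)   # what the server would see after aggregation
```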